Beyond the Raw Request
When starting with Large Language Models (LLMs), developers typically use direct API calls (like the OpenAI Python library) to send a prompt and receive a completion. While functional, this approach becomes unmanageable as applications scale.
The Problem of Statelessness
Large Language Models are inherently stateless. Every time you send a message, the model "forgets" who you are and what you previously said. Each interaction is a blank slate. To maintain a conversation, you must manually pass the entire history back to the model every single time.
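To make this concrete, here is a minimal sketch of the pattern. The `toy_model` function is a hypothetical stand-in for a real chat API call; like the real thing, it can only "know" what appears in the messages passed to that one call:

```python
# Toy stand-in for a stateless chat model (hypothetical, for illustration):
# it answers using only the messages it receives in this single call.
def toy_model(messages):
    # The "model" recognizes the user only if the intro is in the history.
    for m in messages:
        if m["role"] == "user" and "Alice" in m["content"]:
            return "Hello again, Alice!"
    return "Sorry, who are you?"

history = []

# Turn 1: the user introduces herself.
history.append({"role": "user", "content": "Hi, I'm Alice."})
history.append({"role": "assistant", "content": toy_model(history)})

# Turn 2, sent WITHOUT the history: the model has no idea who we are.
print(toy_model([{"role": "user", "content": "Do you remember me?"}]))

# Turn 2 again, this time with the full history attached.
history.append({"role": "user", "content": "Do you remember me?"})
print(toy_model(history))
```

The second call succeeds only because we manually resent every prior message, which is exactly the bookkeeping that grows unmanageable at scale.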
The LangChain Solution
LangChain introduces the ChatOpenAI model wrapper. This isn't just a wrapper for the sake of it: it is the foundation for modularity. By abstracting the model call, we can later swap models, inject memory, and use templates without rewriting our entire codebase.
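The value of the abstraction can be sketched without LangChain at all. The classes below are hypothetical stand-ins (not LangChain's internals): because the application code depends only on a small shared interface, the underlying model can be swapped without touching it:

```python
# Hypothetical model wrappers sharing one interface (illustration only).
class FakeOpenAIChat:
    def invoke(self, prompt: str) -> str:
        return f"[openai] {prompt}"

class FakeLocalChat:
    def invoke(self, prompt: str) -> str:
        return f"[local] {prompt}"

def summarize(llm, text: str) -> str:
    # Application logic targets .invoke() only, not a vendor SDK,
    # so either wrapper can be passed in without code changes.
    return llm.invoke(f"Summarize: {text}")

print(summarize(FakeOpenAIChat(), "LangChain basics"))
print(summarize(FakeLocalChat(), "LangChain basics"))
```

Swapping the model is a one-argument change, which is the modularity the ChatOpenAI wrapper buys you in real applications.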
Your task is to create a ChatOpenAI instance named my_llm with a temperature of 0.7 to allow for more creative (non-deterministic) responses.
from langchain_openai import ChatOpenAI

# temperature=0.7 permits more varied, creative completions
my_llm = ChatOpenAI(temperature=0.7)